ADMM Algorithm for Graphical Lasso with an $\ell_{\infty}$ Element-wise Norm Constraint

Author

  • Karthik Mohan
Abstract

We consider the graphical lasso problem with an additional $\ell_\infty$ element-wise norm constraint on the precision matrix. This problem has applications in high-dimensional covariance decomposition, as in [Janzamin and Anandkumar, 2012a]. We propose an ADMM algorithm to solve this problem, and we use a continuation strategy on the penalty parameter to obtain a fast implementation.

1 Problem

The graphical lasso formulation with an $\ell_\infty$ element-wise norm constraint is as follows:

$$\min_{\Theta \succ 0} \; -\log\det(\Theta) + \langle S, \Theta \rangle + \gamma \|\Theta - \mathrm{diag}(\Theta)\|_1 \quad \text{s.t.} \quad \|\Theta - \mathrm{diag}(\Theta)\|_\infty \le \lambda, \qquad (1)$$

where $\|\cdot\|_1$ denotes the element-wise $\ell_1$ norm and $\|\cdot\|_\infty$ denotes the element-wise $\ell_\infty$ norm of a matrix; for a matrix $X$, $\|X\|_\infty = \max_{i,j} |X_{ij}|$. This formulation first appeared in [Janzamin and Anandkumar, 2012a] in the context of high-dimensional covariance decomposition. We next provide an efficient ADMM algorithm to solve (1).

2 ADMM approach

The alternating direction method of multipliers (ADMM) [Boyd et al., 2011; Eckstein, 2012] is especially suited to optimization problems whose objective can be decomposed into a sum of simple convex functions. By simple, we mean a function whose proximal operator can be computed efficiently. The proximal operator of a function $f$ is given by

$$\mathrm{Prox}_f(A, \lambda) = \arg\min_X \; \tfrac{1}{2}\|X - A\|_F^2 + \lambda f(X). \qquad (2)$$

Consider the following optimization problem:

$$\min_{X, Y} \; f(X) + g(Y) \quad \text{s.t.} \quad X = Y, \qquad (3)$$

where we assume that $f$ and $g$ are simple functions. The ADMM algorithm alternately optimizes the augmented Lagrangian of (3), given by

$$L_\rho(X, Y, \Lambda) = f(X) + g(Y) + \langle \Lambda, X - Y \rangle + \tfrac{\rho}{2}\|X - Y\|_F^2. \qquad (4)$$

The $(k+1)$-th iteration of ADMM is then given by:
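The generic ADMM iteration specializes to problem (1) with the natural split $f(\Theta) = -\log\det(\Theta) + \langle S, \Theta\rangle$ and $g(Y) = \gamma\|\mathrm{off}(Y)\|_1$ plus the indicator of the box constraint on the off-diagonal. Below is a minimal NumPy sketch under that assumed split; the function name, the scaled dual variable $U = \Lambda/\rho$, and the fixed iteration count are illustrative choices, not the authors' implementation. The closed-form eigendecomposition update for the log-det term is the standard one.

```python
import numpy as np

def admm_glasso_linf(S, gamma, lam, rho=1.0, n_iter=500):
    """Sketch of ADMM for: min -logdet(Theta) + <S,Theta> + gamma*||off(Theta)||_1
    subject to ||off(Theta)||_inf <= lam, where off(.) is the off-diagonal part.
    Assumed split: f(Theta) = -logdet + <S,.>,  g(Y) = l1 penalty + box indicator."""
    p = S.shape[0]
    Y = np.eye(p)
    U = np.zeros((p, p))           # scaled dual variable U = Lambda / rho
    off = ~np.eye(p, dtype=bool)   # mask of off-diagonal entries
    for _ in range(n_iter):
        # Theta-update: stationarity gives rho*Theta - Theta^{-1} = rho*(Y-U) - S,
        # solved in closed form by eigendecomposition of the right-hand side.
        d, Q = np.linalg.eigh(rho * (Y - U) - S)
        theta = (d + np.sqrt(d**2 + 4.0 * rho)) / (2.0 * rho)  # positive roots
        Theta = (Q * theta) @ Q.T                              # always PD
        # Y-update: copy the diagonal; on the off-diagonal, soft-threshold by
        # gamma/rho (prox of the l1 term), then project onto the box [-lam, lam].
        A = Theta + U
        Y = A.copy()
        Y[off] = np.clip(
            np.sign(A[off]) * np.maximum(np.abs(A[off]) - gamma / rho, 0.0),
            -lam, lam)
        # dual ascent step
        U += Theta - Y
    return Theta, Y
```

On a small sample covariance, the iterate `Y` satisfies the box constraint exactly (it is a projection), `Theta` is positive definite by construction, and the two agree up to the primal residual after a few hundred iterations.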




Publication date: 2013